
    A unified framework for nonperforming loan modeling in bank production: An application of data envelopment analysis

    The aim of this paper is to conduct a comparative analysis of three environmental approaches in the context of a bank production framework, considering the presence of nonperforming loans (NPLs). Specifically, we examine banks' inefficiency levels using the "by-production technology," "joint-weak disposable technology," and "material balanced technology." To ensure comparability within a directional slack inefficiency framework, we propose a two-step procedure. The study is based on a sample of 379 prominent banks operating in the United States from 2003 to 2017. Our findings reveal that the material balance and by-production technologies yield inefficiency estimates that are more sensitive than those obtained with the joint-weak disposable technology. Additionally, we identify distinct properties among the estimators, emphasizing their unique characteristics for modeling nonperforming loans. Finally, our paper sheds light on the differences between the three estimators in relation to banks' inefficiency levels when nonperforming loans are incorporated into the production process.

    A material balance approach for modelling banks’ production process with non-performing loans

    The aim of this study is to examine how non-performing loans on the balance sheets of Japanese banks affect their performance by adopting a material balance principle. The paper outlines how the material balance conditions can be applied when modelling banks’ production process in the presence of non-performing loans. The paper utilizes the generalized weak G-disposability principle, which accounts for the heterogeneity in banks’ input quality. We test how an input-oriented model (non-performing loans are treated as an input), the weak disposability assumption and the adopted material balance approach affect banks’ performance levels. We apply our test to a sample of Japanese banks over the period 2013 to 2019. Our findings indicate that, even though the input-oriented model and the material balance estimator present similar distributions, they capture the effect of fluctuations in non-performing loans over the examined period differently. In addition, the results under the weak disposability assumption are found to differ from the material balance measures and to be less sensitive to variations in banks’ non-performing loans. We also provide evidence that the generalized weak G-disposability assumption better captures the fluctuations in banks’ performance caused by the restructuring of the Japanese banking industry.

    Inductive queries for a drug designing robot scientist

    It is increasingly clear that machine learning algorithms need to be integrated in an iterative scientific discovery loop, in which data is queried repeatedly by means of inductive queries and where the computer provides guidance to the experiments that are being performed. In this chapter, we summarise several key challenges in achieving this integration of machine learning and data mining algorithms in methods for the discovery of Quantitative Structure Activity Relationships (QSARs). We introduce the concept of a robot scientist, in which all steps of the discovery process are automated; we discuss the representation of molecular data such that knowledge discovery tools can analyse it; and we discuss the adaptation of machine learning and data mining algorithms to guide QSAR experiments.

    Do evolutionary algorithms indeed require random numbers? Extended study

    Random processes are an inherent part of evolutionary algorithms, which are based on Darwin's theory of evolution and Mendel's theory of genetic inheritance. In this paper, we discuss whether random processes are really needed in evolutionary algorithms. We use n periodic deterministic processes instead of random number generators and compare the performance of evolutionary algorithms powered by those processes and by pseudo-random number generators. The deterministic processes used here are based on deterministic chaos and are used to generate periodic series of different lengths. The results presented here are a numerical demonstration rather than a mathematical proof. We propose that a certain class of deterministic processes can be used instead of random number generators without lowering the performance of evolutionary algorithms. © Springer International Publishing Switzerland 2013
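    The idea of replacing a pseudo-random generator with a chaotic deterministic sequence can be illustrated with a minimal sketch: a (1+1) evolutionary strategy whose initialisation and mutation draw numbers from the logistic map instead of a PRNG. The objective function (sphere), dimensions and step size below are illustrative assumptions, not taken from the paper.

```python
# Sketch: a deterministic chaotic sequence (logistic map) used in place of a
# pseudo-random generator inside a simple (1+1) evolutionary strategy.

def logistic_stream(x0=0.7, r=4.0):
    """Deterministic chaotic sequence in (0, 1) from the logistic map."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def sphere(v):
    """Toy objective to minimize: sum of squared coordinates."""
    return sum(c * c for c in v)

def evolve(dim=5, steps=2000, step_size=0.1):
    stream = logistic_stream()
    parent = [next(stream) for _ in range(dim)]   # chaos-driven initialisation
    best = sphere(parent)
    for _ in range(steps):
        # mutate each coordinate with a chaos-driven offset in [-step, +step]
        child = [c + step_size * (2.0 * next(stream) - 1.0) for c in parent]
        f = sphere(child)
        if f <= best:                             # greedy (1+1) selection
            parent, best = child, f
    return best

print(evolve())
```

    Because the stream is deterministic, every run returns exactly the same value, which is what makes comparisons against PRNG-driven runs repeatable.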

    Polarised target for Drell-Yan experiment in COMPASS at CERN, part I

    In the polarised Drell-Yan experiment at the COMPASS facility at CERN, a pion beam with a momentum of 190 GeV/c and an intensity of about 10^8 pions/s interacted with a transversely polarised NH3 target. Muon pairs produced in the Drell-Yan process were detected. The measurement was carried out in 2015 as the first ever polarised Drell-Yan fixed-target experiment. The hydrogen nuclei in the solid-state NH3 were polarised by dynamic nuclear polarisation in the 2.5 T field of a large-acceptance superconducting magnet. A large helium dilution cryostat was used to cool the target down below 100 mK. The polarisation of the hydrogen nuclei reached during data taking was about 80%. Two oppositely polarised target cells, each 55 cm long and 4 cm in diameter, were used. An overview of the COMPASS facility and the polarised target is given, with emphasis on the dilution cryostat and magnet. Results of the polarisation measurement in the Drell-Yan run and overviews of the target material, cell and dynamic nuclear polarisation system are given in part II.
    Comment: 4 pages, 2 figures, Proceedings of the 22nd International Spin Symposium, Urbana-Champaign, Illinois, USA, 25-30 September 201

    Algorithms for Colourful Simplicial Depth and Medians in the Plane

    The colourful simplicial depth of a point x in the plane relative to a configuration of n points in k colour classes is exactly the number of closed simplices (triangles) with vertices from 3 different colour classes that contain x in their convex hull. We consider the problems of efficiently computing the colourful simplicial depth of a point x, and of finding a point, called a median, that maximizes colourful simplicial depth. For computing the colourful simplicial depth of x, our algorithm runs in time O(n log(n) + kn) in general, and O(kn) if the points are sorted around x. For finding the colourful median, we get a time of O(n^4). For comparison, the running times of the best known algorithms for the monochrome versions of these problems are O(n log(n)) in general, improving to O(n) if the points are sorted around x for monochrome depth, and O(n^4) for finding a monochrome median.
    Comment: 17 pages, 8 figures
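    The definition in this abstract can be checked directly with a brute-force sketch that enumerates all O(n^3) triangles, keeps the "rainbow" ones (three distinct colours), and counts those whose closed hull contains x. This is not the paper's O(n log(n) + kn) algorithm, only a naive reference implementation; the point configuration is an illustrative assumption.

```python
# Sketch: brute-force colourful simplicial depth in the plane.
from itertools import combinations

def orient(o, a, b):
    """Orientation of (o, a, b): >0 left turn, <0 right turn, 0 collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_closed_triangle(x, a, b, c):
    """True if x lies in the closed triangle abc (boundary counts)."""
    d1, d2, d3 = orient(x, a, b), orient(x, b, c), orient(x, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def colourful_depth(x, points):
    """points: list of (point, colour) pairs; counts rainbow triangles containing x."""
    return sum(
        1
        for (a, ca), (b, cb), (c, cc) in combinations(points, 3)
        if len({ca, cb, cc}) == 3 and in_closed_triangle(x, a, b, c)
    )

# three colour classes surrounding the origin (hypothetical example)
pts = [((1, 0), "r"), ((-1, 1), "g"), ((-1, -1), "b"),
       ((2, 1), "r"), ((0, 2), "g"), ((0, -2), "b")]
print(colourful_depth((0, 0), pts))   # → 8: every rainbow triangle contains the origin
```

    With 2 points per colour there are 2×2×2 = 8 rainbow triangles, and in this configuration all of them contain the origin.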

    Quantum Sign Permutation Polytopes

    Convex polytopes are convex hulls of point sets in n-dimensional Euclidean space E^n that generalize 2-dimensional convex polygons and 3-dimensional convex polyhedra. We concentrate on the class of n-dimensional polytopes in E^n called sign permutation polytopes. We characterize sign permutation polytopes before relating their construction to constructions over the space of quantum density matrices. Finally, we consider the problem of state identification and show how sign permutation polytopes may be useful in addressing issues of robustness.

    A Novel Approach for Ellipsoidal Outer-Approximation of the Intersection Region of Ellipses in the Plane

    In this paper, a novel technique for tight outer-approximation of the intersection region of a finite number of ellipses in 2-dimensional (2D) space is proposed. First, the vertices of a tight polygon that contains the convex intersection of the ellipses are found in an efficient manner. To do so, the intersection points of the ellipses that fall on the boundary of the intersection region are determined, and a set of points is generated on the elliptic arcs connecting every two neighbouring intersection points. By finding the tangent lines to the ellipses at the extended set of points, a set of half-planes is obtained, whose intersection forms a polygon. To find the polygon more efficiently, the points are given an order and the intersection of the half-planes corresponding to every two neighbouring points is calculated. If the polygon is convex and bounded, these calculated points together with the initially obtained intersection points will form its vertices. If the polygon is non-convex or unbounded, we can detect this situation and then generate additional discrete points only on the elliptical arc segment causing the issue, and restart the algorithm to obtain a bounded and convex polygon. Finally, the smallest area ellipse that contains the vertices of the polygon is obtained by solving a convex optimization problem. Through numerical experiments, it is illustrated that the proposed technique returns a tighter outer-approximation of the intersection of multiple ellipses, compared to conventional techniques, with only slightly higher computational cost.
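    The tangent half-plane step can be sketched for the simplest case: for an axis-aligned ellipse (x/a)^2 + (y/b)^2 = 1 centred at the origin, the tangent at a boundary point (x0, y0) is x·x0/a^2 + y·y0/b^2 = 1, and the ellipse interior lies in the half-plane ≤ 1. Sampling boundary points and collecting these half-planes gives the circumscribing polygon described above. The two ellipses and the sample count below are illustrative assumptions; the paper's intersection-point ordering and smallest-enclosing-ellipse steps are not reproduced.

```python
# Sketch: tangent half-planes of axis-aligned origin-centred ellipses, whose
# intersection forms a polygon containing the intersection of the ellipses.
import math

def tangent_halfplane(a, b, theta):
    """Coefficients (cx, cy) of the half-plane cx*x + cy*y <= 1 supporting the
    ellipse (x/a)^2 + (y/b)^2 = 1 at the boundary point (a cos t, b sin t)."""
    x0, y0 = a * math.cos(theta), b * math.sin(theta)
    return (x0 / a**2, y0 / b**2)

def outer_polygon_halfplanes(ellipses, samples=16):
    """Collect tangent half-planes from `samples` boundary points per ellipse."""
    planes = []
    for (a, b) in ellipses:
        for k in range(samples):
            planes.append(tangent_halfplane(a, b, 2 * math.pi * k / samples))
    return planes

def inside(planes, p, tol=1e-9):
    """True if p satisfies every half-plane, i.e. lies in the outer polygon."""
    return all(cx * p[0] + cy * p[1] <= 1 + tol for (cx, cy) in planes)

planes = outer_polygon_halfplanes([(2.0, 1.0), (1.0, 2.0)])
print(inside(planes, (0.0, 0.0)))   # centre lies in both ellipses → True
print(inside(planes, (3.0, 0.0)))   # outside both ellipses → False
```

    Each added boundary point tightens the polygon; the paper's refinement of generating extra points only on the problematic arc segment exploits exactly this.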

    Constraining spacetime torsion with LAGEOS

    We compute the corrections to the orbital Lense-Thirring effect (or frame-dragging) in the presence of spacetime torsion. We derive the equations of motion of a test body in the gravitational field of a rotating axisymmetric massive body, using the parametrized framework of Mao, Tegmark, Guth and Cabi. We calculate the secular variations of the longitudes of the node and of the pericenter. We also show how the LAser GEOdynamics Satellites (LAGEOS) can be used to constrain torsion parameters. We report the experimental constraints obtained using both the node and perigee measurements of the orbital Lense-Thirring effect. This makes LAGEOS and Gravity Probe B (GPB) complementary frame-dragging and torsion experiments, since they constrain three different combinations of torsion parameters.